Efficient Mixed-Norm Regularization: Algorithms and Safe Screening Methods
Authors
Abstract
Sparse learning has recently received increasing attention in many areas including machine learning, statistics, and applied mathematics. The mixed-norm regularization based on the l1/lq norm with q > 1 is attractive in many regression and classification applications in that it promotes group sparsity in the model. The resulting optimization problem is, however, challenging to solve due to the inherent structure of the l1/lq regularization. Existing work deals with special cases such as q = 2 and q = ∞, and cannot be easily extended to the general case. In this paper, we propose an efficient algorithm based on the accelerated gradient method for solving the l1/lq-regularized problem, which is applicable to all values of q larger than 1, thus significantly extending existing work. One key building block of the proposed algorithm is the l1/lq-regularized Euclidean projection (EP1q). Our theoretical analysis reveals the key properties of EP1q and illustrates why EP1q for general q is significantly more challenging to solve than the special cases. Based on this analysis, we develop an efficient algorithm for EP1q by solving two zero-finding problems. To further improve the efficiency of solving large-dimensional l1/lq-regularized problems, we propose an efficient and effective "screening" method that quickly identifies the inactive groups, i.e., groups whose components are all zero in the solution. This can lead to a substantial reduction in the number of groups entered into the optimization. An appealing feature of our screening method is that the data set needs to be scanned only once to run the screening. Compared to the cost of solving the l1/lq-regularized problems, the computational cost of our screening test is negligible. The key to the proposed screening method is an accurate sensitivity analysis of the dual optimal solution as the regularization parameter varies.
Experimental results demonstrate the efficiency of the proposed algorithm.
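To make the approach described in the abstract concrete, here is a minimal sketch of an accelerated proximal gradient (FISTA-style) solver for the l1/lq-regularized least squares problem, shown only for the special case q = 2, where the projection step (EP1q) has a closed form: block soft-thresholding. For general q > 1 the paper instead solves EP1q via two zero-finding problems, which is not reproduced here. Function names and the synthetic data are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def group_soft_threshold(v, tau, groups):
    """Closed-form EP1q for q = 2: shrink each group's l2 norm toward zero."""
    out = np.zeros_like(v)
    for g in groups:
        norm = np.linalg.norm(v[g])
        if norm > tau:
            out[g] = (1.0 - tau / norm) * v[g]
    return out

def fista_group_lasso(X, y, lam, groups, n_iter=500):
    """Accelerated gradient method for 0.5*||Xw - y||^2 + lam * sum_g ||w_g||_2."""
    n, d = X.shape
    L = np.linalg.norm(X, 2) ** 2          # Lipschitz constant of the gradient
    w = np.zeros(d)
    z = w.copy()
    t = 1.0
    for _ in range(n_iter):
        grad = X.T @ (X @ z - y)           # gradient step on the smooth part
        w_new = group_soft_threshold(z - grad / L, lam / L, groups)
        t_new = (1.0 + np.sqrt(1.0 + 4.0 * t * t)) / 2.0
        z = w_new + ((t - 1.0) / t_new) * (w_new - w)   # momentum step
        w, t = w_new, t_new
    return w

# Tiny synthetic example: two groups, the second irrelevant to y.
rng = np.random.default_rng(0)
X = rng.standard_normal((50, 6))
groups = [np.arange(0, 3), np.arange(3, 6)]
w_true = np.array([1.0, -2.0, 0.5, 0.0, 0.0, 0.0])
y = X @ w_true
w_hat = fista_group_lasso(X, y, lam=5.0, groups=groups)
print(np.linalg.norm(w_hat[groups[1]]))    # norm of the irrelevant group's coefficients
```

The screening idea in the abstract would sit in front of such a solver: groups certified as inactive are dropped before the iterations begin, so the gradient and projection steps run over a smaller problem.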
Similar resources
Safe Subspace Screening for Nuclear Norm Regularized Least Squares Problems
Nuclear norm regularization has been shown to be very promising for pursuing a low-rank matrix solution in various machine learning problems. Many efforts have been devoted to developing efficient algorithms for solving the optimization problem in nuclear norm regularization. Solving it for large-scale matrix variables, however, remains a challenging task since the complexity grows fast with the size of...
Truncated Singular Value Decomposition and Generalized Tikhonov Methods for Stabilizing the Downward Continuation Problem
The methods applied to regularization of the ill-posed problems can be classified under “direct” and “indirect” methods. Practice has shown that the effects of different regularization techniques on an ill-posed problem are not the same, and as such each ill-posed problem requires its own investigation in order to identify its most suitable regularization method. In the geoid computations witho...
Efficient Minimization Methods of Mixed l2-l1 and l1-l1 Norms for Image Restoration
Image restoration problems are often solved by finding the minimizer of a suitable objective function. Usually this function consists of a data-fitting term and a regularization term. For the least squares solution, both the data-fitting and the regularization terms are in the l2 norm. In this paper, we consider the least absolute deviation (LAD) solution and the least mixed norm (LMN) solution....
A Primal Dual - Interior Point Framework for Using the L1-Norm or the L2-Norm on the Data and Regularization Terms of Inverse Problems
Maximum A Posteriori (MAP) estimates in inverse problems are often based on quadratic formulations, corresponding to a Least Squares fitting of the data and to the use of the L2 norm on the regularization term. While the implementation of this estimation is straightforward and usually based on the Gauss Newton method, resulting estimates are sensitive to outliers, and spatial distributions of t...
Efficient Online and Batch Learning Using Forward Backward Splitting
We describe, analyze, and experiment with a framework for empirical loss minimization with regularization. Our algorithmic framework alternates between two phases. On each iteration we first perform an unconstrained gradient descent step. We then cast and solve an instantaneous optimization problem that trades off minimization of a regularization term while keeping close proximity to the result...
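The two-phase scheme described in this snippet (a gradient step on the loss, followed by a proximal step on the regularizer) can be sketched as follows for l1 regularization, where the proximal step reduces to coordinate-wise soft-thresholding. This is a generic forward-backward splitting sketch under our own naming and synthetic data, not the cited paper's implementation.

```python
import numpy as np

def soft_threshold(v, tau):
    """Prox of tau * ||.||_1: shrink each coordinate toward zero."""
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

def fobos_least_squares(X, y, lam, n_iter=300):
    """Forward-backward splitting for 0.5*||Xw - y||^2 + lam * ||w||_1."""
    n, d = X.shape
    step = 1.0 / np.linalg.norm(X, 2) ** 2   # safe fixed step size (1/L)
    w = np.zeros(d)
    for _ in range(n_iter):
        w = w - step * X.T @ (X @ w - y)     # forward: unconstrained gradient step
        w = soft_threshold(w, step * lam)    # backward: proximal step on the l1 term
    return w

# Synthetic example with a 2-sparse ground truth.
rng = np.random.default_rng(1)
X = rng.standard_normal((40, 8))
w_true = np.zeros(8)
w_true[:2] = [3.0, -1.5]
y = X @ w_true
w_hat = fobos_least_squares(X, y, lam=1.0)
```

The instantaneous optimization solved in the backward phase is what generalizes across regularizers: swapping `soft_threshold` for a group-wise shrinkage yields the mixed-norm setting of the main paper.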
Journal: CoRR
Volume: abs/1307.4156
Pages: -
Publication date: 2013